
    Dynamic Packet Scheduling in Wireless Networks

    We consider protocols that serve communication requests arising over time in a wireless network that is subject to interference. Unlike previous approaches, we take the geometry of the network and power control into account, both of which can increase the network's performance significantly. We introduce a stochastic and an adversarial model to bound the packet injection. Although the signal-to-interference-plus-noise ratio (SINR) is our primary motivation, the approach is not limited to SINR-based models. It also covers virtually all other common interference models, for example the multiple-access channel, the radio-network model, the protocol model, and distance-2 matching. Packet-routing networks in which each edge or each node can transmit or receive one packet at a time can be modeled as well. Starting from algorithms for the respective scheduling problem with static transmission requests, we build distributed stable protocols. This is more involved than in previous, similar approaches because the algorithms we consider do not necessarily scale linearly when the input instance is scaled. We can guarantee a throughput that is as large as that of the original static algorithm. In particular, for SINR models the competitive ratios of the protocol in comparison to optimal ones in the respective model are between constant and O(log^2 m) for a network of size m.
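    The feasibility notion underlying the SINR model can be illustrated with a small sketch. The following is not the paper's protocol, only a hedged check of whether a set of simultaneous transmissions satisfies the standard SINR constraints; the path-loss exponent `alpha`, noise `noise`, and threshold `beta` are assumed illustrative parameters.

```python
# Hedged sketch of the standard SINR feasibility check: a set of simultaneous
# transmissions is feasible if every receiver's signal-to-interference-plus-noise
# ratio meets the threshold beta. Parameters are illustrative, not the paper's.

from math import dist

def sinr_feasible(links, powers, alpha=3.0, noise=1e-9, beta=1.0):
    """links: list of (sender_xy, receiver_xy); powers: transmit power per link."""
    for i, (s_i, r_i) in enumerate(links):
        signal = powers[i] / dist(s_i, r_i) ** alpha
        interference = sum(
            powers[j] / dist(s_j, r_i) ** alpha
            for j, (s_j, _) in enumerate(links)
            if j != i
        )
        if signal < beta * (noise + interference):
            return False
    return True
```

    A static scheduling algorithm would repeatedly select feasible subsets under such a check; the paper turns algorithms of this kind into distributed stable protocols for dynamically injected packets.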

    Approximation Algorithms for Wireless Link Scheduling with Flexible Data Rates

    We consider scheduling problems in wireless networks with flexible data rates. That is, the amount of data transmitted per time step depends on the signal quality, which is determined by the signal-to-interference-plus-noise ratio (SINR). Each wireless link has a utility function mapping SINR values to the respective data rates. We have to decide which transmissions are performed simultaneously and (depending on the problem variant) also which transmission powers are used. In the capacity-maximization problem, one strives to maximize the overall network throughput, i.e., the summed utility of all links. For arbitrary utility functions (not necessarily continuous ones), we present an O(log n)-approximation for n communication requests. This algorithm is built on a constant-factor approximation for the special case in which each utility function consists of a single step. In other words, each link has an individual threshold and we aim at maximizing the number of links whose threshold is satisfied. Along the way, this improves the result in [Kesselheim, SODA 2011] by not only extending it to individual thresholds but also showing a constant approximation factor independent of assumptions on the underlying metric space or the network parameters. In addition, we consider the latency-minimization problem. Here, each link has a demand, e.g., representing an amount of data. We have to compute a schedule of shortest possible length such that each link's demand is fulfilled, that is, the summed utility (data transferred) over the schedule is at least as large as the demand. Based on the capacity-maximization algorithm, we show an O(log^2 n)-approximation for this problem.
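    To make the single-step special case concrete, here is a hedged sketch of that problem only: each link has an individual SINR threshold and contributes utility 1 exactly when the threshold is met. The brute-force search and the callback `sinr_of` are purely illustrative, not the paper's constant-factor algorithm.

```python
# Hedged sketch of the single-step special case: pick a subset of links maximizing
# the number of links whose individual SINR threshold is satisfied. The exhaustive
# search below is for illustration only; the paper gives an efficient approximation.

from itertools import combinations

def step_utility(sinr, beta_i):
    return 1 if sinr >= beta_i else 0

def best_subset_bruteforce(n, sinr_of, thresholds):
    """sinr_of(subset, i): SINR of link i when exactly `subset` transmits (assumed callback)."""
    best, best_value = frozenset(), 0
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            value = sum(step_utility(sinr_of(subset, i), thresholds[i]) for i in subset)
            if value > best_value:
                best, best_value = frozenset(subset), value
    return best, best_value
```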

    A Constant-Factor Approximation for Wireless Capacity Maximization with Power Control in the SINR Model

    In modern wireless networks, devices are able to set the power for each transmission they carry out. Both experimental and theoretical results indicate that such power control can improve the network capacity significantly. We study this problem in the physical interference model using SINR constraints. In the SINR capacity-maximization problem, we are given n pairs of senders and receivers, located in a metric space (usually a so-called fading metric). The algorithm has to select a subset of these pairs and choose a power level for each of them with the objective of maximizing the number of simultaneous communications. That is, the selected pairs have to satisfy the SINR constraints with respect to the chosen powers. We present the first algorithm achieving a constant-factor approximation in fading metrics. The best previous results depend on further network parameters such as the ratio of the maximum to the minimum distance between a sender and its receiver; expressed only in terms of n, they are (trivial) Omega(n)-approximations. Our algorithm still achieves an O(log n)-approximation if we only assume a general metric space rather than a fading metric. Furthermore, using standard techniques, the algorithm can also be applied in single-hop and multi-hop scheduling scenarios, where we obtain polylog(n)-approximations.
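    One power assignment that features prominently in this line of work (whether it is the exact choice made in this paper is not confirmed by the abstract) is the "square-root" assignment, which sets a link's power proportional to the square root of its path loss. A minimal sketch, with the scaling constant `c` and exponent `alpha` as illustrative assumptions:

```python
# Hedged sketch of the square-root power assignment studied in the SINR
# capacity literature: power proportional to d^(alpha/2) for sender-receiver
# distance d, balancing the interference a link causes and tolerates.

from math import dist

def square_root_power(links, alpha=3.0, c=1.0):
    """links: list of (sender_xy, receiver_xy) pairs -> list of transmit powers."""
    return [c * dist(s, r) ** (alpha / 2) for s, r in links]
```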

    Submodular Secretary Problems: Cardinality, Matching, and Linear Constraints

    We study various generalizations of the secretary problem with submodular objective functions. In general, a set of requests is revealed step by step to an algorithm in random order. For each request, one option has to be selected so as to maximize a monotone submodular function while ensuring feasibility. For our results, we assume that we are given an offline algorithm computing an alpha-approximation for the respective problem. This way, we separate computational limitations from those due to the online nature; when focusing only on the online aspect, we can assume alpha = 1. In the submodular secretary problem, feasibility constraints are cardinality constraints or, equivalently, sets are feasible if and only if they are independent sets of a k-uniform matroid. That is, out of a randomly ordered stream of entities, one has to select a subset of size k. For this problem, we present a 0.31alpha-competitive algorithm for all k, which asymptotically reaches competitive ratio alpha/e for large k. In submodular secretary matching, one side of a bipartite graph is revealed online. Upon arrival, each node has to be matched permanently to an offline node or discarded irrevocably. We give a 0.207alpha-competitive algorithm. This also covers the problem in which sets of entities are feasible if and only if they are independent with respect to a transversal matroid. In both cases, we improve over the previously best known competitive ratios, using a generalization of the algorithm for the classic secretary problem. Furthermore, we give an O(alpha d^(-2/(B-1)))-competitive algorithm for submodular function maximization subject to linear packing constraints. Here, d is the column sparsity, that is, the maximal number of non-zero entries in a column of the constraint matrix, and B is the minimal capacity of the constraints. Notably, this bound is independent of the total number of constraints. We improve the algorithm to be O(alpha d^(-1/(B-1)))-competitive if both d and B are known to the algorithm beforehand.
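    The abstract refers to a generalization of the classic secretary algorithm. For context, here is a hedged sketch of that classic building block only (observe roughly a 1/e-fraction of the randomly ordered stream, then accept the first candidate beating everything seen so far); the submodular and matroid generalizations in the paper are more involved.

```python
# Hedged sketch of the classic 1/e secretary rule. Assumes a non-empty stream
# of distinct scores arriving in uniformly random order.

import math

def classic_secretary(values):
    n = len(values)
    sample = max(1, math.floor(n / math.e))            # observation phase length
    threshold = max(values[:sample])                   # best candidate seen so far
    for v in values[sample:]:
        if v > threshold:
            return v                                   # hire first candidate beating the sample max
    return values[-1]                                  # fallback: forced to take the last one
```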

    Jamming-Resistant Learning in Wireless Networks

    We consider capacity maximization in wireless networks under adversarial interference conditions. There are n links, each consisting of a sender and a receiver, which repeatedly try to perform a successful transmission. In each time step, the success of attempted transmissions depends on interference conditions, which are captured by an interference model (e.g., the SINR model). Additionally, an adversarial jammer can render a (1-delta)-fraction of time steps unsuccessful. For this scenario, we analyze a framework for distributed learning algorithms to maximize the number of successful transmissions. Our main result is an algorithm based on no-regret learning that converges to an O(1/delta)-approximation. It even provides a constant-factor approximation when the jammer blocks exactly a (1-delta)-fraction of time steps. In addition, we consider a stochastic jammer, for which we obtain a constant-factor approximation after a polynomial number of time steps. We also consider more general settings, in which links arrive and depart dynamically, and in which each sender tries to reach multiple receivers. Our algorithms perform favorably in simulations.
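    As a hedged illustration of the no-regret learning primitive (not the paper's distributed framework), a single link could run a multiplicative-weights learner over the two per-step actions "transmit" and "idle". The learning rate `eta` and the reward convention are illustrative assumptions.

```python
# Hedged sketch of a multiplicative-weights (Hedge-style) no-regret learner for
# one link's per-step decision. The paper's framework coordinates many such
# learners under interference and jamming; this only shows the primitive.

import random

class MWLearner:
    def __init__(self, eta=0.1):
        self.weights = {"transmit": 1.0, "idle": 1.0}
        self.eta = eta

    def choose(self):
        total = sum(self.weights.values())
        r, acc = random.random() * total, 0.0
        for action, w in self.weights.items():
            acc += w
            if r <= acc:
                return action
        return "idle"

    def update(self, rewards):
        """rewards: dict mapping each action to its observed reward in [0, 1]."""
        for action, reward in rewards.items():
            self.weights[action] *= (1.0 + self.eta) ** reward
```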

    Algorithms as Mechanisms: The Price of Anarchy of Relax-and-Round

    Many algorithms that are originally designed without explicitly considering incentive properties are later combined with simple pricing rules and used as mechanisms. The resulting mechanisms are often natural and simple to understand. But how good are these algorithms as mechanisms? Truthful reporting of valuations is typically not a dominant strategy (certainly not with a pay-your-bid, first-price rule, but it is likely not a good strategy even with a critical-value, second-price-style rule either). Our goal is to show that a wide class of approximation algorithms yields mechanisms with low Price of Anarchy in this way. The seminal result of Lucier and Borodin [SODA 2010] shows that combining a greedy α-approximation algorithm with a pay-your-bid payment rule yields a mechanism whose Price of Anarchy is O(α). In this paper we significantly extend the class of algorithms for which such a result is available by showing that this close connection between approximation ratio on the one hand and Price of Anarchy on the other also holds for the design principle of relaxation and rounding, provided that the relaxation is smooth and the rounding is oblivious. We demonstrate the far-reaching consequences of our result by showing its implications for sparse packing integer programs, such as multi-unit auctions and generalized matching, for the maximum traveling salesman problem, for combinatorial auctions, and for single-source unsplittable flow problems. In all these problems our approach leads to novel, simple, near-optimal mechanisms whose Price of Anarchy either matches or beats the performance guarantees of known mechanisms.
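    For readers unfamiliar with smoothness-based Price of Anarchy bounds, one common template from this literature (in the spirit of Syrgkanis and Tardos; the paper's own notion of a smooth relaxation may differ in its details) reads as follows.

```latex
% Hedged sketch of a standard smooth-mechanism template; not necessarily the
% exact definition used in the paper.
A mechanism is $(\lambda,\mu)$-smooth if for every valuation profile $v$ and
every bid profile $b$ there exist deviations $b_i^*(v)$ such that
\[
  \sum_i u_i\bigl(b_i^*, b_{-i}; v_i\bigr)
  \;\ge\; \lambda \cdot \mathrm{OPT}(v) \;-\; \mu \sum_i P_i(b),
\]
where $P_i(b)$ denotes player $i$'s payment. Such a guarantee yields a Price of
Anarchy of at most $\max\{1,\mu\}/\lambda$ for a broad class of equilibria.
```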

    Think Eternally: Improved Algorithms for the Temp Secretary Problem and Extensions

    The Temp Secretary Problem was recently introduced by [Fiat et al., ESA 2015]. It is a generalization of the Secretary Problem in which commitments are temporary, lasting for a fixed duration. We present a simple online algorithm with improved performance guarantees for cases already considered by [Fiat et al., ESA 2015] and give competitive ratios for new generalizations of the problem. In the classical setting, where candidates have identical contract durations gamma << 1 and we are allowed to hire up to B candidates simultaneously, our algorithm is (1/2 - O(sqrt{gamma}))-competitive. For large B, the bound improves to 1 - O(1/sqrt{B}) - O(sqrt{gamma}). Furthermore, we generalize the problem from cardinality constraints towards general packing constraints. We achieve a competitive ratio of 1 - O(sqrt{(1+log(d)+log(B))/B}) - O(sqrt{gamma}), where d is the sparsity of the constraint matrix and B is generalized to the capacity ratio of the linear constraints. Additionally, we extend the problem towards arbitrary hiring durations. Our algorithmic approach is a relaxation that aggregates all temporal constraints into a non-temporal constraint. We then apply a linear-scaling algorithm that, on every arrival, computes a tentative solution on the input known up to this point. This tentative solution uses the non-temporal, relaxed constraints scaled down linearly by the amount of time that has already passed.
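    A hedged sketch of that linear-scaling idea for the cardinality case: at arrival time t in [0, 1], recompute a tentative solution on the candidates seen so far under the relaxed budget scaled down to t * B, and hire the arriving candidate only if it appears in that tentative solution and capacity is free. All names and the tie handling below are illustrative, not the paper's exact algorithm.

```python
# Hedged sketch of a linear-scaling tentative-solution rule for temporary hires
# of duration gamma with at most B simultaneous hires.

def temp_secretary_sketch(arrivals, B, gamma):
    """arrivals: list of (time, value) with time in [0, 1], increasing; returns hires."""
    seen, active, hires = [], [], []
    for t, value in arrivals:
        seen.append((t, value))
        active = [(t0, v0) for (t0, v0) in active if t0 + gamma > t]  # expire old hires
        budget = int(t * B)                                           # linearly scaled relaxed budget
        tentative = sorted(seen, key=lambda x: -x[1])[:budget]        # tentative offline solution
        if (t, value) in tentative and len(active) < B:
            active.append((t, value))
            hires.append((t, value))
    return hires
```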

    Prophet Secretary for Combinatorial Auctions and Matroids

    The secretary and the prophet inequality problems are central to the field of Stopping Theory. Recently, there has been a lot of work on generalizing these models to multiple items because of their applications in mechanism design. The most important of these generalizations are to matroids and to combinatorial auctions (which extend bipartite matching). Kleinberg and Weinberg \cite{KW-STOC12} and Feldman et al. \cite{feldman2015combinatorial} show that for an adversarial arrival order of the random variables, the optimal prophet inequalities give a 1/2-approximation. For many settings, however, it is conceivable that the arrival order is chosen uniformly at random, akin to the secretary problem. For such a random arrival model, we improve upon the 1/2-approximation and obtain (1 - 1/e)-approximation prophet inequalities for both matroids and combinatorial auctions. This also improves on the results of Yan \cite{yan2011mechanism} and Esfandiari et al. \cite{esfandiari2015prophet}, who worked in the special cases where the arrival order can be fully controlled or where there is only a single item. Our techniques are threshold based: we convert the discrete problem into a continuous setting and then give a generic template for dynamically adjusting these thresholds to lower bound the expected total welfare.
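    To illustrate the time-based threshold idea in the simplest single-item case (a hedged sketch only; the decay curve `exp(-t)` and the estimate `sample_max` are illustrative assumptions, not the paper's construction): each value arrives at a uniformly random time in [0, 1] and is accepted if it exceeds a threshold that decreases over time.

```python
# Hedged single-item sketch of a prophet-secretary style time-varying threshold.

import math
import random

def prophet_secretary_single_item(sample_max, values):
    """sample_max: an estimate of E[max]; values: the realized values."""
    arrivals = sorted((random.random(), v) for v in values)   # uniformly random arrival times
    for t, v in arrivals:
        threshold = sample_max * math.exp(-t)                 # illustrative decaying threshold
        if v >= threshold:
            return v
    return 0.0
```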

    Best-response dynamics in combinatorial auctions with item bidding

    In a combinatorial auction with item bidding, agents participate in multiple single-item second-price auctions at once. As some items might be substitutes, agents need to strategize in order to maximize their utilities. A number of results indicate that high welfare can be achieved this way, giving bounds on the welfare at equilibrium. Recently, however, criticism has been raised that equilibria are hard to compute and therefore unlikely to be attained. In this paper, we take a different perspective. We study simple best-response dynamics: agents are activated one after the other, and each activated agent updates his strategy myopically to a best response against the other agents' current strategies. Often these dynamics may take exponentially long before they converge, or they may not converge at all. However, as we show, convergence is not even necessary for good welfare guarantees. Given that agents' bid updates are aggressive enough but not too aggressive, the game will remain in states of good welfare after each agent has updated his bid at least once. In more detail, we show that if agents have fractionally subadditive valuations, natural dynamics reach and remain in a state that provides a 1/3 approximation to the optimal welfare after each agent has updated his bid at least once. For subadditive valuations, we can guarantee an Ω(1/log m) approximation in the case of m items, which holds after each agent has updated his bid at least once and at any point thereafter. The latter bound is complemented by a negative result, showing that no kind of best-response dynamics can guarantee more than an o(log log m / log m) fraction of the optimal social welfare.
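    One natural best-response style update for a bidder with a fractionally subadditive (XOS) valuation, sketched under assumptions: treat the other agents' current highest bids as item prices, pick a utility-maximizing bundle, and bid the supporting additive values on that bundle and zero elsewhere. Whether this particular update is "aggressive enough" in the paper's sense is not settled by the abstract.

```python
# Hedged sketch of an XOS best-response style bid update in simultaneous
# second-price item auctions. `additive_clauses` are the XOS clauses
# (dicts item -> value); `prices` are the competing highest bids per item.

def xos_best_response(additive_clauses, prices, items):
    best_bid, best_utility = {i: 0.0 for i in items}, 0.0
    for clause in additive_clauses:
        # For a fixed additive clause, the demanded bundle is every item whose
        # clause value exceeds its current price.
        bundle = [i for i in items if clause.get(i, 0.0) > prices.get(i, 0.0)]
        utility = sum(clause[i] - prices[i] for i in bundle)
        if utility > best_utility:
            best_utility = utility
            best_bid = {i: (clause[i] if i in bundle else 0.0) for i in items}
    return best_bid
```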

    Simplified Prophet Inequalities for Combinatorial Auctions

    We consider prophet inequalities for XOS and MPH-k combinatorial auctions and give a simplified proof of the existence of static and anonymous item prices which recover the state-of-the-art competitive ratios. Our proofs make use of a linear programming formulation which has a non-negative objective value if there are prices that admit a given competitive ratio α ≥ 1. Changing our perspective to the dual space by an application of strong LP duality, we use an interpretation of the dual variables as probabilities to directly obtain our result. In contrast to previous work, our proofs do not require arguing about the specific values buyers have for bundles, but only about the presence or absence of items. As a side remark, for any k ≥ 2, this simplification also leads to a tiny improvement in the best competitive ratio for MPH-k combinatorial auctions, from 4k - 2 to 2k + 2√(k(k-1)) - 1.
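    A short check (not from the paper) confirms that the new ratio is indeed strictly smaller than the old one for every k ≥ 2:

```latex
\[
  \sqrt{k(k-1)} \;<\; \sqrt{k^2 - k + \tfrac14} \;=\; k - \tfrac12
  \quad\Longrightarrow\quad
  2k + 2\sqrt{k(k-1)} - 1 \;<\; 2k + (2k-1) - 1 \;=\; 4k - 2 .
\]
```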